How to Knit Your Own Markov Blanket: Resisting the Second Law with Metamorphic Minds
Abstract
Hohwy (2016, this volume) argues there is a tension between the free energy principle and leading depictions of mind as embodied, enactive, and extended (so-called 'EEE cognition'). The tension is traced to the importance, in free energy formulations, of a conception of mind and agency that depends upon the presence of a 'Markov blanket' demarcating the agent from the surrounding world. In what follows I show that the Markov blanket considerations do not, in fact, lead to the kinds of tension that Hohwy depicts. On the contrary, they actively favour the EEE story. This is because the Markov property, as exemplified in biological agents, picks out neither a unique nor a stationary boundary. It is this multiplicity and mutability, rather than the absence of agent-environment boundaries as such, that EEE cognition celebrates.

"My cousin has great changes coming – one day he'll wake with wings"
Cousin Caterpillar (The Incredible String Band)

1. The Markov Blanket Conception of Mind

Markov blankets are named after Andrey Markov (1856-1922), a mathematician whose seminal work explored abstract systems that remember their past trajectories only insofar as they store a single (current) value. In such systems (Markov chains; see Norris (1998)) the next state depends only on the value of the current state. This is the so-called Markov property. For this reason, such systems are sometimes said to be 'memoryless'. Now consider a complex system composed of many interacting nodes (variables). Pearl (1988) introduced the term 'Markov blanket' to describe the set of nodes such that, for some given node X, knowing their states renders X conditionally independent of every other node in the network: once the blanket is known, knowledge of the wider system adds nothing to the prediction of X. The states of those neighbouring nodes thus fix (statistically, not causally) the state of the target node.
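For readers who want the formal statement behind these two ideas, here is a minimal sketch (not part of the original text) using standard notation: X_t denotes the state of a chain at step t, and mb(X) denotes the Markov blanket of a node X, which in a Bayesian network comprises X's parents, its children, and its children's other parents.

The Markov property for a chain (the next state depends only on the current one):

P(X_{t+1} | X_t, X_{t-1}, ..., X_0) = P(X_{t+1} | X_t)

The blanket's screening-off property in Pearl's sense (conditional on the blanket, the target is independent of everything else):

P(X | mb(X), Y) = P(X | mb(X))   for every node Y outside mb(X) and distinct from X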